6 results
2023 Journal article Open Access
Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning
Metta C., Beretta A., Guidotti R., Yin Y., Gallinari P., Rinzivillo S., Giannotti F.
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, XAI approaches are often tested only on generalist classifiers and do not reflect realistic problems such as those of medical diagnosis. In this paper, we aim to improve the trust and confidence of users towards automatic AI decision systems in the field of medical skin lesion diagnosis by customizing an existing XAI approach to explain an AI model able to recognize different types of skin lesions. The explanation is generated through synthetic exemplar and counter-exemplar images of skin lesions, and our contribution offers the practitioner a way to highlight the crucial traits responsible for the classification decision. A validation survey with domain experts, beginners, and unskilled people shows that the use of explanations improves trust and confidence in the automatic decision system. Also, an analysis of the latent space adopted by the explainer unveils that some of the most frequent skin lesion classes are distinctly separated. This phenomenon may stem from the intrinsic characteristics of each class and may help resolve common misclassifications made by human experts. (See the illustrative sketch after this entry.)
Source: International Journal of Data Science and Analytics (Print) (2023). doi:10.1007/s41060-023-00401-z
DOI: 10.1007/s41060-023-00401-z
Project(s): TAILOR via OpenAIRE, HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE, SoBigData-PlusPlus via OpenAIRE

See at: International Journal of Data Science and Analytics Open Access | link.springer.com Open Access | ISTI Repository Open Access | CNR ExploRA
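
A minimal sketch of the exemplar/counter-exemplar idea mentioned in the abstract above (a generic illustration, not the paper's actual pipeline): perturb the latent representation of the image being explained, decode the perturbations into synthetic images, and split them according to whether the classifier keeps or changes its decision. The encode, decode, and classify callables are hypothetical placeholders for the latent generative model and the skin-lesion classifier.

```python
# Illustrative sketch only: synthetic exemplars / counter-exemplars from a latent space.
# `encode`, `decode`, and `classify` are hypothetical stand-ins, not the paper's code.
import numpy as np

def exemplars_counter_exemplars(image, encode, decode, classify,
                                n_samples=200, sigma=0.5, seed=None):
    rng = np.random.default_rng(seed)
    z = encode(image)                         # latent code of the image to explain
    target = classify(decode(z))              # class assigned to the original image
    exemplars, counter_exemplars = [], []
    for _ in range(n_samples):
        z_new = z + rng.normal(0.0, sigma, size=z.shape)  # perturb the latent code
        x_new = decode(z_new)                             # synthetic skin-lesion image
        if classify(x_new) == target:
            exemplars.append(x_new)           # same decision: supports the prediction
        else:
            counter_exemplars.append(x_new)   # different decision: highlights decisive traits
    return exemplars, counter_exemplars
```

Comparing exemplars against counter-exemplars is what lets a practitioner spot the traits that flip the classification decision.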


2023 Conference article Open Access
The explanation dialogues: understanding how legal experts reason about XAI methods
State L., Bringas Colmenarejo A., Beretta A., Ruggieri S., Turini F., Law S.
The Explanation Dialogues project is an expert focus study that aims to uncover expectations, reasoning, and rules of legal experts and practitioners towards explainable artificial intelligence (XAI). We examine legal perceptions and disputes that arise in a fictional scenario that resembles a daily life situation - a bank's use of an automated decision-making (ADM) system to decide on credit allocation to individuals. Through this simulation, the study aims to provide insights into the legal value and validity of explanations of ADMs, identify potential gaps and issues that may arise in the context of compliance with European legislation, and provide guidance on how to address these shortcomings.
Source: EWAF'23 - European Workshop on Algorithmic Fairness, Winterthur, Switzerland, 07-09/06/2023
Project(s): XAI via OpenAIRE

See at: ceur-ws.org Open Access | ISTI Repository Open Access | CNR ExploRA


2023 Conference article Restricted
Interpretable data partitioning through tree-based clustering methods
Guidotti R., Landi C., Beretta A., Fadda D., Nanni M.
The growing interpretable machine learning research field mainly focuses on the explanation of supervised approaches. However, unsupervised approaches might also benefit from considering interpretability aspects. While existing clustering methods only provide the assignment of records to clusters without justifying the partitioning, we propose tree-based clustering methods that offer interpretable data partitioning through a shallow decision tree. These decision trees enable easy-to-understand explanations of cluster assignments through short and understandable split conditions. The proposed methods are evaluated through experiments on synthetic and real datasets and proved to be more effective than traditional clustering approaches and interpretable ones in terms of standard evaluation measures and runtime. Finally, a case study involving human participation demonstrates the effectiveness of the interpretable clustering trees returned by the proposed method. (See the illustrative sketch after this entry.)
Source: DS 2023 - 26th International Conference on Discovery Science, pp. 492–507, Porto, Portugal, 09-11/10/2023
DOI: 10.1007/978-3-031-45275-8_33

See at: doi.org Restricted | link.springer.com Restricted | CNR ExploRA
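
As a rough illustration of the kind of output described above (a generic surrogate-tree baseline, not the clustering methods proposed in the paper): a shallow decision tree fitted to cluster labels yields, for every cluster, a short root-to-leaf rule that justifies the assignment.

```python
# Illustrative sketch only: describing a partition with a shallow decision tree.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

# Reference partition from a standard (non-interpretable) clustering method.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# A shallow tree (max_depth=3) approximates the partition with short split conditions.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=["x0", "x1"]))
```

Each printed path reads as a compact rule such as "x0 <= 1.2 and x1 > -0.5 -> cluster 3", which is the kind of justification a purely centroid-based method does not provide.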


2023 Journal article Open Access
Co-design of human-centered, explainable AI for clinical decision support
Panigutti C., Beretta A., Fadda D., Giannotti F., Pedreschi D., Perotti A., Rinzivillo S.
eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models, and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, the second aspect has received limited attention so far in the literature. Effective AI explanation interfaces are fundamental for allowing human decision-makers to take advantage of, and effectively oversee, high-risk AI systems. Following an iterative design approach, we present the first cycle of prototyping-testing-redesigning of an explainable AI technique and its explanation user interface for clinical Decision Support Systems (DSS). We first present an XAI technique that meets the technical requirements of the healthcare domain: sequential, ontology-linked patient data, and multi-label classification tasks. We demonstrate its applicability to explain a clinical DSS, and we design a first prototype of an explanation user interface. Next, we test such a prototype with healthcare providers and collect their feedback with a two-fold outcome: first, we obtain evidence that explanations increase users' trust in the XAI system; second, we obtain useful insights on the perceived deficiencies of their interaction with the system, so we can re-design a better, more human-centered explanation interface.
Source: ACM transactions on interactive intelligent systems (Online) 13 (2023). doi:10.1145/3587271
DOI: 10.1145/3587271
Project(s): HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE

See at: dl.acm.org Open Access | Archivio istituzionale della Ricerca - Scuola Normale Superiore Open Access | ISTI Repository Open Access | ACM Transactions on Interactive Intelligent Systems Restricted | CNR ExploRA


2022 Conference article Open Access
Detecting addiction, anxiety, and depression by users psychometric profiles
Monreale A., Iavarone B., Rossetto E., Beretta A.
Detecting and characterizing people with mental disorders is an important task that could help the work of different healthcare professionals. A diagnosis for specific mental disorders can require a long time, which is problematic because being diagnosed gives access to support groups, treatment programs, and medications that might help the patients. In this paper, we study the problem of exploiting supervised learning approaches, based on users' psychometric profiles extracted from Reddit posts, to detect users dealing with Addiction, Anxiety, and Depression disorders. The empirical evaluation shows an excellent predictive power of the psychometric profile and that features capturing the post's content are more effective for the classification task than features describing the user's writing style. We achieve an accuracy of 96% using the entire psychometric profile and an accuracy of 95% when we exclude linguistic features from the user profile. (See the illustrative sketch after this entry.)
Source: WWW '22 - The ACM Web Conference 2022, pp. 1189–1197, Virtual Event, Lyon, France, 25-29/04/2022
DOI: 10.1145/3487553.3524918
Project(s): TAILOR via OpenAIRE, HumanE-AI-Net via OpenAIRE, SoBigData-PlusPlus via OpenAIRE

See at: ISTI Repository Open Access | dl.acm.org Restricted | CNR ExploRA
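
A minimal sketch of the supervised setup described above, assuming a precomputed per-user feature matrix; the random data, the three labels, and the random-forest classifier are illustrative placeholders, not the paper's actual psychometric features or model.

```python
# Illustrative sketch only: classifying users from precomputed psychometric features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))       # stand-in for per-user psychometric features
y = rng.integers(0, 3, size=300)     # stand-in labels: addiction / anxiety / depression

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

With real psychometric profiles in place of the random matrix, the same evaluation loop supports the kind of comparison reported in the abstract (full profile vs. profile without linguistic features).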


2022 Conference article Open Access
Follow the flow: a prospective on the on-line detection of flow mental state through machine learning
Sajno E., Beretta A., Novielli N., Riva G.
Flow is a precious mental state for achieving high sports performance. It is defined as an emotional state with high valence and high arousal levels. However, a viable detection system that could provide information about it in real time is not yet available. The prospective work presented here aims at the creation of an online flow detection framework. A supervised machine learning model will be trained to predict valence and arousal levels, both on already existing databases and on freshly collected physiological data. As a final result, defining the minimal amount of data (in terms of both sensors and time) needed to predict a flow state will enable the creation of a real-time flow detection interface. (See the illustrative sketch after this entry.)
Source: MetroXRAINE 2022 - IEEE International Workshop on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering, pp. 217–222, Rome, Italy, 26-28/10/2022
DOI: 10.1109/metroxraine54828.2022.9967605
DOI: 10.31234/osf.io/9z5pe

See at: doi.org Open Access | ISTI Repository Open Access | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
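
A minimal sketch of the valence/arousal prediction step that such a framework would build on, assuming windowed physiological features and continuous ratings; the synthetic data and the random-forest regressor are placeholders, not the sensors or model the authors plan to use.

```python
# Illustrative sketch only: predicting valence and arousal from physiological features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))        # stand-in features from windowed physiological signals
Y = rng.uniform(1, 9, size=(400, 2))  # stand-in valence/arousal ratings on a 1-9 scale

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)  # handles 2-output targets
model.fit(X_tr, Y_tr)
print("test R^2 (averaged over valence and arousal):", round(model.score(X_te, Y_te), 3))
```

Measuring how prediction quality degrades as sensors or window length are removed is one way to identify the minimal data needed for real-time flow detection, as the abstract proposes.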